Keynote Speakers

Keynote Speaker 1
Prof. Celimuge Wu, The University of Electro-Communications, Japan
Short-bio: Celimuge Wu received his PhD degree from The University of Electro-Communications, Japan. He is currently a professor and the director of the Meta-Networking Research Center at The University of Electro-Communications. His research interests include semantic communications, vehicular networks, edge computing, IoT, and AI for wireless networking and computing. He serves as an associate editor of IEEE Transactions on Networking, IEEE Transactions on Cognitive Communications and Networking, IEEE Transactions on Network Science and Engineering, and IEEE Transactions on Green Communications and Networking. He is Vice Chair (Asia Pacific) of the IEEE Technical Committee on Big Data (TCBD). He is a recipient of the 2021 IEEE Communications Society Outstanding Paper Award, the 2021 IEEE Internet of Things Journal Best Paper Award, the IEEE Computer Society 2020 Best Paper Award, and the IEEE Computer Society 2019 Best Paper Award Runner-Up. He is an IEEE Vehicular Technology Society Distinguished Lecturer, a Foreign Fellow of the Engineering Academy of Japan (EAJ), and a Fellow of the Asia-Pacific Artificial Intelligence Association (AAIA).
Speech Title: Low Latency Semantic Communication System for Remote Driving with Generative AI
Abstract: The explosive growth of multimedia data, the continuous surge in the number of connected devices, and the increasing demand for real-time intelligent applications are posing unprecedented challenges to current communication infrastructures. Traditional communication systems that transmit raw or compressed data often suffer from excessive latency and bandwidth inefficiency, which can be critical in delay-sensitive applications such as remote driving. To overcome these limitations, semantic communications have recently emerged as a paradigm shift that focuses on transmitting the meaning of data rather than the raw data itself. This talk introduces a novel low-latency video semantic communication framework tailored for remote driving scenarios. In contrast to conventional video transmission methods, the proposed system employs an asymmetric encoder–decoder architecture that transmits only a minimal number of bits by leveraging semantic feature extraction, while reconstructing high-quality video at the receiver through generative AI techniques. To validate its effectiveness, we design and implement a prototype system that seamlessly integrates semantic feature extraction, efficient transmission, and deep learning–based video reconstruction at the receiver side.
Keynote Speaker 2
Prof. Jixin Ma, University of Greenwich, UK
Short-bio: Jixin Ma is a Professor of Computer Science (Artificial Intelligence) and the Director of the PhD/MPhil Programme at the School of Computing and Mathematical Sciences, Faculty of Engineering and Science, University of Greenwich. He was also the Director of the Centre for Computer and Computational Science (2011–2018) and Leader of the Artificial Intelligence Research Group (2008–2018). His main research areas include Artificial Intelligence, Data Science, and Information Systems, with special interests in Temporal Representation and Reasoning, Information Security, Machine Learning, and Case-Based Reasoning. He has published 200+ research papers in top-tier journals including Information Fusion, Information Sciences, IEEE Internet of Things Journal, IEEE Transactions on Cloud Computing, IEEE Access, Neural Computing and Applications, Information Systems, Neurocomputing, Engineering Applications of Artificial Intelligence, The Computer Journal, Applied Ontology, and Artificial Intelligence Review, and at esteemed international conferences such as AAAI, IJCAI, and ECAI. He has won six Best Paper / Best Student Paper awards and, as Principal Investigator or Co-Investigator, has received research grants worth more than £330,000. He obtained his BSc and MSc in Mathematics in 1982 and 1988, respectively, and his PhD in Computer Science in 1994. Before his professorship appointment, he was a Reader (2007–2019), Senior Lecturer (1997–2006), and Research Fellow and Lecturer (1994–1996) at the University of Greenwich, UK. He is also a Visiting Professor at Beijing Normal University, Anhui University, Hainan University, Zhengzhou Light Industrial University, and Macau City University.
Keynote Speaker 3
Prof. Geyong Min, University of Exeter, UK
Short-bio: Professor Geyong Min is a Chair in High Performance Computing and Networking in the Department of Computer Science at the University of Exeter, UK. His research interests include Computer Networks, Cloud and Edge Computing, Mobile and Ubiquitous Computing, Systems Modelling, and Performance Engineering. His recent research has been supported by Horizon Europe, UKRI, EPSRC, the Royal Society, the Royal Academy of Engineering, and industrial partners. He has published more than 200 research papers in leading international journals including IEEE/ACM Transactions on Networking, IEEE Journal on Selected Areas in Communications, IEEE Transactions on Computers, IEEE Transactions on Parallel and Distributed Systems, and IEEE Transactions on Wireless Communications, and at reputable international conferences such as SIGCOMM-IMC, INFOCOM, and ICDCS. He is an Associate Editor of several international journals, e.g., IEEE Transactions on Parallel and Distributed Systems, IEEE Transactions on Computers, and IEEE Transactions on Cloud Computing. He has served as the General Chair or Program Chair of a number of international conferences in the area of Information and Communications Technologies.
Speech Title: Scalable AI at the Edge: Architectures, Optimization, and the Road Ahead
Abstract: Intelligent edge networks are increasingly expected to serve as a core infrastructure for scalable AI, enabling real-time intelligence across diverse environments and application domains. In this keynote, we will discuss architectural and optimization perspectives for enabling scalable AI at the edge. We first present a cooperative edge server deployment architecture that reduces infrastructure overhead through coordinated resource provisioning. Building on this foundation, we introduce a hybrid deployment paradigm that integrates static and mobile edge servers, improving spatial coverage and computational efficiency under dynamic workloads. To further enhance scalability at the service level, we present a task-aware, fine-grained service placement mechanism for performance optimization in intelligent edge networks. Looking ahead, the talk highlights emerging challenges and opportunities at the intersection of edge systems and modern AI models.
Keynote Speaker 4
Prof. Mohammed Al-qaness, Zhejiang Normal University, China
Short-bio: Mohammed Al-qaness received his Ph.D. in Information and Communication Engineering from the Wuhan University of Technology in 2017. He is currently a professor at the School of Physics and Electronic Information Engineering, Zhejiang Normal University. His current research interests include human sensing (sensor-based and WiFi-based human activity recognition, and gesture recognition), optimization algorithms, and signal and image processing. He has published over 150 SCI papers in top-tier journals, and his works have been cited over 11,600 times. He has been recognized on the career-long World's Top 2% Scientists list in 2024 and 2025, and on the single-year World's Top 2% Scientists list from 2021 to 2025 (this list is released by Stanford University's Prof. John P. A. Ioannidis and Elsevier's Mendeley Data).
Speech Title: Multi-Scale AI for Human Sensing: From Wearable Sensors to Ambient Radio Waves
Abstract: The recognition and interpretation of human activities and gestures are essential for a wide range of applications, including healthcare, smart environments, and human-computer interaction. This talk explores the application of multi-scale artificial intelligence (AI) in human sensing, focusing on the use of wearable sensors and ambient radio waves (such as WiFi). By leveraging multi-scale AI techniques, we process data at various levels, from low-level signal processing to high-level activity recognition and gesture classification. Wearable sensors, such as accelerometers and gyroscopes, provide detailed insights into physical movements, while ambient radio waves offer a more passive yet powerful means of monitoring human activity without the need for direct contact. This talk will highlight how these technologies can be applied independently to recognize activities, detect falls, and identify hand gestures, demonstrating the versatility and robustness of multi-scale AI in human sensing. Through real-time data processing and context-aware models, these advancements pave the way for smarter, more intuitive systems in healthcare, assisted living, and beyond.
Keynote Speaker 5